{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Azure WAF SQLI Incident Triage Notebook\n",
"\n",
"- Version: 1.0\n",
"- Data Sources Required: AzureDiagnostics, SecurityAlert\n",
"\n",
"This Notebook is designed to help you triage incidents generated from Azure Front Door Web Application Firewall (WAF) SQL injection (SQLI) events. <br>\n",
"You can use it to help determine if these incidents are True Positive, Benign Positive or False Positive and if False Positive add additional exclusions to your WAF policy to prevent further occurrence.<br>\n",
"\n",
"In order to use this Notebook you need to have Analytics generating incidents related to Azure Front Door WAF SQLI events in your Sentinel workspace, as well as permissions to access and update WAF rules in Front Door.<br>\n",
"\n",
"More details about Azure Front Door WAF can be found here: https://learn.microsoft.com/en-us/azure/web-application-firewall/afds/afds-overview\n",
"\n",
"---------------------------------------------------------\n",
"\n",
"### Notebook initialization\n",
"\n",
"Before running this notebook ensure you have MSTICPy installed with the Azure extras.\n",
"\n",
"The next cell:\n",
"- Imports the required packages into the notebook\n",
"- Sets a number of configuration options.\n",
"\n",
"<details>\n",
" <summary>More details...</summary>\n",
" \n",
"This should complete without errors. If you encounter errors or warnings look at the following two notebooks:<br>\n",
" - [TroubleShootingNotebooks](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/TroubleShootingNotebooks.ipynb)<br>\n",
" - [ConfiguringNotebookEnvironment](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb)<br>\n",
" \n",
"If you are running in the Microsoft Sentinel Notebooks environment (Azure Notebooks or Azure ML) you can run live versions of these notebooks:\n",
" - [Run TroubleShootingNotebooks](./TroubleShootingNotebooks.ipynb)<br>\n",
" - [Run ConfiguringNotebookEnvironment](./ConfiguringNotebookEnvironment.ipynb)<br>\n",
" \n",
"You may also need to do some additional configuration to successfully use functions such as Threat Intelligence service lookup and Geo IP lookup. There are more details about this in the ConfiguringNotebookEnvironment notebook and in these documents:<br>\n",
" - [msticpy configuration](https://msticpy.readthedocs.io/en/latest/getting_started/msticpyconfig.html)<br>\n",
" - [Threat intelligence provider configuration](https://msticpy.readthedocs.io/en/latest/data_acquisition/TIProviders.html#configuration-file)<br>\n",
"</details>\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import msticpy as mp\n",
"import httpx\n",
"import json\n",
"import ipywidgets as widgets\n",
"from IPython.display import HTML\n",
"from msticpy.nbwidgets import SelectAlert\n",
"from msticpy.vis.entity_graph_tools import EntityGraph\n",
"from datetime import datetime, timezone, timedelta\n",
"from msticpy.common.exceptions import MsticpyException\n",
"from msticpy.nbwidgets import Progress\n",
"\n",
"mp.init_notebook()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Default Parameters\n",
"ws_name = \"Default\"\n",
"incident_id = None\n",
"end = datetime.now(timezone.utc) + timedelta(hours=1)\n",
"start = end - timedelta(days=30)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Authenticate to Microsoft Sentinel APIs and Select Subscriptions\n",
"\n",
"The notebook is expecting your Microsoft Sentinel Tenant ID, Subscription ID, Resource Group name, Workspace name, and Workspace ID to be configured in msticpyconfig.yaml in the current folder or location specified by MSTICPYCONFIG environment variable.<br>\n",
"For help with setting up your msticpyconfig.yaml file see the Setup section at the end of this notebook, the [ConfigureNotebookEnvironment](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb) notebook or https://msticpy.readthedocs.io/en/latest/getting_started/msticpyconfig.html \n",
"\n",
"These cells connect to the Microsoft Sentinel APIs and the Log Analytics data store behind it.<br>\n",
"In order to use this the user must have at least read permissions on the Microsoft Sentinel workspace.<br>\n",
"Select the Workspace you want to connect to from the list of workspaces configured in your msticpyconfig.yaml file and then authenticate to this workspace.<br>\n",
"\n",
"Note: you may be asked to authenticate twice, once for the APIs and once for the Log Analytics workspace."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\n",
" \"Configured workspaces: \",\n",
" \", \".join(mp.settings.get_config(\"AzureSentinel.Workspaces\").keys()),\n",
")\n",
"import ipywidgets as widgets\n",
"\n",
"ws_param = widgets.Combobox(\n",
" description=\"Workspace Name\",\n",
" value=ws_name,\n",
" options=list(mp.settings.get_config(\"AzureSentinel.Workspaces\").keys()),\n",
")\n",
"ws_param\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ws_name = ws_param.value\n",
"sent_prov = mp.MicrosoftSentinel(workspace=ws_name)\n",
"sent_prov.connect()\n",
"qry_prov = mp.QueryProvider(\"MSSentinel\")\n",
"qry_prov.connect(mp.WorkspaceConfig(ws_name))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get SQLI Incidents\n",
"\n",
"The first step of the investigation is to find the Azure Front Door WAF SQLI incidents to triage, to do that we look for any incidents generated from Analytics looking at SQLI events from WAF logs.\n",
"\n",
"Review the details of incidents below and select one to triage further."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Format and display incident details\n",
"def display_incident(incident):\n",
" details = f\"\"\"\n",
" <h3>Selected Incident: {incident['title']},</h3>\n",
" <b>Incident time: </b> {incident['createdTimeUtc']} -\n",
" <b>Severity: </b> {incident['severity']} -\n",
" <b>Assigned to: </b>{incident['properties.owner.userPrincipalName']} -\n",
" <b>Status: </b> {incident['status']}\n",
" \"\"\"\n",
" new_idx = [idx.split(\".\")[-1] for idx in incident.index]\n",
" incident.set_axis(new_idx, copy=False)\n",
" return (HTML(details), pd.DataFrame(incident))\n",
"\n",
"\n",
"# Find WAF SQLI analytics deployed in the workspace\n",
"analytics = sent_prov.list_analytic_rules()\n",
"if analytics.empty:\n",
" raise MsticpyException(\"No Analytics found in this workspace\")\n",
"else:\n",
" sqli_analytics = analytics[\n",
" (\n",
" analytics[\"properties.query\"].str.contains(\n",
" \"FrontDoorWebApplicationFirewallLog\"\n",
" )\n",
" | analytics[\"properties.query\"].str.contains(\n",
" \"ApplicationGatewayFirewallLog\"\n",
" )\n",
" )\n",
" & (\n",
" analytics[\"properties.query\"].str.contains(\"SQLI\")\n",
" | analytics[\"properties.query\"].str.contains(\"SQL Injection\")\n",
" )\n",
" ]\n",
" sqli_analytics_ids = sqli_analytics[\"id\"].unique()\n",
"\n",
"# Find incidents triggered by these analytics\n",
"incidents = sent_prov.list_incidents()\n",
"if incidents.empty:\n",
" raise MsticpyException(\"No Incidents found in this workspace\")\n",
"else:\n",
" sqli_mask = incidents[\"properties.relatedAnalyticRuleIds\"].apply(\n",
" lambda x: any(\n",
" [\n",
" item\n",
" for item in sqli_analytics_ids\n",
" if item.lower() in [analytic.lower() for analytic in x]\n",
" ]\n",
" )\n",
" )\n",
" sqli_incidents = incidents[sqli_mask]\n",
" sqli_incidents.rename(\n",
" columns={\n",
" \"properties.title\": \"title\",\n",
" \"properties.status\": \"status\",\n",
" \"properties.severity\": \"severity\",\n",
" \"properties.createdTimeUtc\": \"createdTimeUtc\",\n",
" },\n",
" inplace=True,\n",
" )\n",
" sqli_incidents.mp_plot.timeline(\n",
" title=\"SQLI Incidents\",\n",
" group_by=\"severity\",\n",
" source_columns=[\"title\", \"status\", \"severity\"],\n",
" time_column=\"createdTimeUtc\",\n",
" )\n",
"\n",
"# Allow user to select the incident they want to focus on and display the details of the alert once selected\n",
"md(\"Select an incident to triage:\", \"bold\")\n",
"alert_sel = SelectAlert(\n",
" alerts=sqli_incidents,\n",
" default_alert=incident_id,\n",
" columns=[\"title\", \"severity\", \"status\", \"name\"],\n",
" time_col=\"createdTimeUtc\",\n",
" id_col=\"id\",\n",
" action=display_incident,\n",
")\n",
"alert_sel.display()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Review details of the incident\n",
"\n",
"Review the details below to understand the core details of the incident selected."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"incident_details = sent_prov.get_incident(\n",
" alert_sel.selected_alert.id.split(\"/\")[-1], entities=True, alerts=True\n",
")\n",
"ent_dfs = []\n",
"for ent in incident_details[\"Entities\"][0]:\n",
" ent_df = pd.json_normalize(ent[1])\n",
" ent_df[\"Type\"] = ent[0]\n",
" ent_dfs.append(ent_df)\n",
"\n",
"\n",
"if ent_dfs:\n",
" md(\"Incident Entities:\", \"bold\")\n",
" new_df = pd.concat(ent_dfs, axis=0, ignore_index=True)\n",
" grp_df = new_df.groupby(\"Type\")\n",
" for grp in grp_df:\n",
" md(grp[0], \"bold\")\n",
" display(grp[1].dropna(axis=1))\n",
"\n",
"alert_out = []\n",
"if \"Alerts\" in incident_details.columns:\n",
" md(\"Related Alerts:\", \"bold\")\n",
" for alert in incident_details.iloc[0][\"Alerts\"]:\n",
" qry = f\"SecurityAlert | where TimeGenerated between((datetime({start})-7d)..datetime({end})) | where SystemAlertId == '{alert['ID']}'\"\n",
"\n",
" df = qry_prov.exec_query(qry)\n",
" display(df)\n",
" if df.empty or not df[\"Entities\"].iloc[0]:\n",
" alert_full = {\"ID\": alert[\"ID\"], \"Name\": alert[\"Name\"], \"Entities\": None}\n",
" else:\n",
" alert_full = {\n",
" \"ID\": alert[\"ID\"],\n",
" \"Name\": alert[\"Name\"],\n",
" \"Entities\": json.loads(df[\"Entities\"].iloc[0]),\n",
" }\n",
" alert_out.append(alert_full)\n",
"\n",
" incident_details[\"Alerts\"] = [alert_out]\n",
"\n",
"md(\"Graph of incident entities:\", \"bold\")\n",
"graph = EntityGraph(incident_details.iloc[0])\n",
"graph.plot(timeline=True)\n",
"\n",
"incident_id = alert_sel.value[\"id\"]\n",
"rule_ids = incidents[incidents[\"id\"] == incident_id].iloc[0][\n",
" \"properties.relatedAnalyticRuleIds\"\n",
"]\n",
"rule_mask = analytics[\"id\"].apply(\n",
" lambda x: any(item for item in rule_ids if item.lower() in x.lower())\n",
")\n",
"incident_rules = analytics[rule_mask]\n",
"if len(incident_rules.index) > 1:\n",
" incident_query = \"\"\n",
" for rule in incident_rules.iterrows():\n",
" incident_query += rule[1][\"properties.query\"]\n",
"else:\n",
" incident_query = incident_rules.iloc[0][\"properties.query\"]\n",
"\n",
"front_door = False\n",
"app_gateway = False\n",
"if \"FrontDoorWebApplicationFirewallLog\" in incident_query:\n",
" front_door = True\n",
"if \"ApplicationGatewayFirewallLog\" in incident_query:\n",
" app_gateway = True\n",
"\n",
"if not front_door:\n",
" raise MsticpyException(\n",
" \"This notebook is designed to process Azure Front Door WAF events. Incidents that contain Application Gateway WAF events are not currently supported.\"\n",
" )\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Review TI results\n",
"\n",
"The following cell takes any Entities associated with the Incident selected and checks if they appear in Threat Intelligence feeds to provide further context.<br>\n",
"Documentation on Incident entities can be found here: https://learn.microsoft.com/azure/sentinel/incident-investigation<br>\n",
"This cell uses MSTICPy's threat intelligence features and will use the providers configured in the msticpyconfig.yaml file. More details on this feature can be found here: https://msticpy.readthedocs.io/en/latest/data_acquisition/TIProviders.html"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ti = mp.TILookup()\n",
"sev = []\n",
"resps = pd.DataFrame()\n",
"\n",
"# For each entity look it up in Threat Intelligence data\n",
"md(\"Looking up entities in TI feeds...\")\n",
"prog = Progress(completed_len=len(incident_details[\"Entities\"].iloc[0]))\n",
"i = 0\n",
"result_dfs = []\n",
"for ent in incident_details[\"Entities\"].iloc[0]:\n",
" i += 1\n",
" prog.update_progress(i)\n",
" if ent[0] == \"Ip\":\n",
" resp = ti.lookup_ioc(ent[1][\"address\"], ioc_type=\"ipv4\")\n",
" result_dfs.append(ti.result_to_df(resp))\n",
" sev += resp[\"Severity\"].unique().tolist()\n",
" if ent[0] == \"Url\" or ent[0] == \"DnsResolution\":\n",
" if \"url\" in ent[1]:\n",
" lkup_dom = ent[1][\"url\"]\n",
" else:\n",
" lkup_dom = ent[1][\"domainName\"]\n",
" resp = ti.lookup_ioc(lkup_dom, ioc_type=\"url\")\n",
" result_dfs.append(ti.result_to_df(resp))\n",
" sev += resp[\"Severity\"].unique().tolist()\n",
" if ent[0] == \"FileHash\":\n",
" resp = ti.lookup_ioc(ent[1][\"hashValue\"])\n",
" result_dfs.append(ti.result_to_df(resp))\n",
" sev += resp[\"Severity\"].unique().tolist()\n",
" if result_dfs:\n",
" resps = pd.concat(result_dfs)\n",
" else:\n",
" resps = pd.DataFrame()\n",
"\n",
"# Take overall severity of the entities based on the highest score\n",
"if \"high\" in sev:\n",
" severity = \"High\"\n",
"elif \"warning\" in sev:\n",
" severity = \"Warning\"\n",
"elif \"information\" in sev:\n",
" severity = \"Information\"\n",
"else:\n",
" severity = \"None\"\n",
"\n",
"md(\"Checking to see if incident entities appear in TI data...\")\n",
"\n",
"incident_details[\"TI Severity\"] = severity\n",
"# Output TI hits of high or warning severity\n",
"display(incident_details)\n",
"if (\n",
" incident_details[\"TI Severity\"].iloc[0] == \"High\"\n",
" or incident_details[\"TI Severity\"].iloc[0] == \"Warning\"\n",
" or incident_details[\"TI Severity\"].iloc[0] == \"Information\"\n",
"):\n",
" print(\"Incident:\")\n",
" display(\n",
" incident_details[\n",
" [\n",
" \"properties.createdTimeUtc\",\n",
" \"properties.incidentNumber\",\n",
" \"properties.title\",\n",
" \"properties.status\",\n",
" \"properties.severity\",\n",
" \"TI Severity\",\n",
" ]\n",
" ]\n",
" )\n",
" md(\"TI Results:\", \"bold\")\n",
" display(\n",
" resps[[\"Ioc\", \"IocType\", \"Provider\", \"Severity\", \"Details\"]].sort_values(\n",
" by=\"Severity\"\n",
" )\n",
" )\n",
"else:\n",
" md(\"None of the Entities appeared in TI data\", \"bold\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Get raw events in incident time frame\n",
"\n",
"Now that we have selected an incident to triage we can look at the WAF log events that relate to the incident, along with details of the WAF rule that triggered the incident.\n",
"\n",
"Review the details in the cells below and select a specific event to see further details in the cells below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def parse_rule_id(row):\n",
" return row[\"ruleName_s\"].split(\"-\")[-1]\n",
"\n",
"\n",
"# Format display of WAF rule details\n",
"def display_event_details(rule_detail):\n",
" details = f\"\"\"\n",
" <h3>Event Type: {rule_detail['details_msg_s']},</h3>\n",
" <b>Time Generated: </b> {rule_detail['TimeGenerated']}<br>\n",
" <b>Rule: </b> {rule_detail['ruleName_s']} <br>\n",
" <b>Details: </b>{rule_detail['details_data_s']} <br>\n",
" <b>Client IP: </b> {rule_detail['clientIP_s']} <br>\n",
" <b>Client Port: </b> {rule_detail['clientPort_s']} <br>\n",
" <b>Socket IP: </b> {rule_detail['socketIP_s']} <br>\n",
" <b>Host: </b> {rule_detail['host_s']}<br>\n",
" \"\"\"\n",
"\n",
" if rule_detail[\"ruleName_s\"].startswith(\"Microsoft_DefaultRuleSet\"):\n",
" for rule in owasp_sqli_rule_set:\n",
" if \"id\" in rule and rule[\"id\"] == rule_detail[\"RuleID\"]:\n",
" owasp_rule = rule\n",
" else:\n",
" owasp_rule = \"Custom Rule, this is not supported by this notebook\"\n",
" return (HTML(details), \"OWASP Rule Details:\", owasp_rule)\n",
"\n",
"\n",
"# Get raw events and parse out the rule ID\n",
"start_time = incidents[incidents[\"id\"] == incident_id].iloc[0][\n",
" \"properties.firstActivityTimeUtc\"\n",
"]\n",
"end_time = incidents[incidents[\"id\"] == incident_id].iloc[0][\n",
" \"properties.lastActivityTimeUtc\"\n",
"]\n",
"rule_query = f\"\"\"AzureDiagnostics\n",
"| where TimeGenerated between(datetime('{start_time}')..datetime('{end_time}'))\n",
"| where ruleName_s contains 'SQLI'\n",
"\"\"\"\n",
"\n",
"raw_events_df = qry_prov.exec_query(rule_query)\n",
"if raw_events_df.empty:\n",
" md(\"Unable to find any events related to this incident.\")\n",
"else:\n",
" rule_details_df = raw_events_df[\n",
" [\n",
" \"TimeGenerated\",\n",
" \"ResourceGroup\",\n",
" \"SubscriptionId\",\n",
" \"policy_s\",\n",
" \"details_msg_s\",\n",
" \"requestUri_s\",\n",
" \"httpStatusCode_d\",\n",
" \"ruleName_s\",\n",
" \"action_s\",\n",
" \"details_data_s\",\n",
" \"clientIP_s\",\n",
" \"host_s\",\n",
" \"socketIP_s\",\n",
" \"clientPort_s\",\n",
" ]\n",
" ].drop_duplicates()\n",
" rule_details_df[\"RuleID\"] = rule_details_df.apply(parse_rule_id, axis=1)\n",
" md(\"WAF rule firing events occurring in the incident timeframe:\", \"bold\")\n",
" display(rule_details_df)\n",
" rule_details_df.mp_plot.timeline(\n",
" title=\"WAF Rule Firing Events\", group_by=\"ruleName_s\"\n",
" )\n",
"\n",
"if isinstance(rule_details_df, pd.DataFrame) and not rule_details_df.empty:\n",
" owasp_sqi_rules_response = httpx.get(\n",
" \"https://raw.githubusercontent.com/SpiderLabs/owasp-modsecurity-crs/v3.2/master/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf\"\n",
" )\n",
" owasp_sqi_rules = [\n",
" x for x in owasp_sqi_rules_response.text.split(\"\\n\") if not x.startswith(\"#\")\n",
" ]\n",
" owasp_sqi_rules_text = \"\".join([str(item) for item in owasp_sqi_rules])\n",
" owasp_sqi_rules_text.split(\"'\\\"\")\n",
" owasp_sqli_rule_set = []\n",
" for rule in owasp_sqi_rules_text.split(\"'\\\"\"):\n",
" rule_details = {}\n",
" tags = []\n",
" for row in rule.split(\"\\\\ \"):\n",
" if row.startswith(\"SecRule \"):\n",
" rule_details[\"rulelogic\"] = row.split(\"SecRule \")[-1]\n",
" elif \":\" in row:\n",
" split_row = row.split(\":\")\n",
" if split_row[0].strip('\"') == \"tag\":\n",
" tags.append(split_row[1].strip('\"'))\n",
" else:\n",
" rule_details[split_row[0].strip('\"')] = (\n",
" split_row[1].strip('\"').strip(\",\")\n",
" )\n",
" rule_details[\"tags\"] = tags\n",
" owasp_sqli_rule_set.append(rule_details)\n",
"\n",
" md(\"Select an WAF Event to triage:\", \"bold\")\n",
"\n",
" rule_details_df[\"full_id\"] = rule_details_df[\"RuleID\"] + rule_details_df[\n",
" \"TimeGenerated\"\n",
" ].astype(str)\n",
"\n",
" event_sel = SelectAlert(\n",
" alerts=rule_details_df,\n",
" columns=[\n",
" \"TimeGenerated\",\n",
" \"ResourceGroup\",\n",
" \"SubscriptionId\",\n",
" \"policy_s\",\n",
" \"details_msg_s\",\n",
" \"requestUri_s\",\n",
" \"httpStatusCode_d\",\n",
" \"ruleName_s\",\n",
" \"action_s\",\n",
" \"details_data_s\",\n",
" \"clientIP_s\",\n",
" \"host_s\",\n",
" \"socketIP_s\",\n",
" \"clientPort_s\",\n",
" \"RuleID\",\n",
" ],\n",
" time_col=\"TimeGenerated\",\n",
" id_col=\"full_id\",\n",
" action=display_event_details,\n",
" )\n",
" event_sel.display()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Review other events related to this rule\n",
"\n",
"Look at other events associated with the event above to understand the context of this WAF rule and its historical activity."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rule_events_query = f\"\"\"AzureDiagnostics\n",
"| where TimeGenerated between(datetime('{start_time}')..datetime('{end_time}'))\n",
"| where ruleName_s =~ \"{event_sel.value['ruleName_s']}\" or clientIP_s =~ \"{event_sel.value['clientIP_s']}\" or host_s =~ \"{event_sel.value['host_s']}\"\n",
"\"\"\"\n",
"rule_events_df = qry_prov.exec_query(rule_events_query)\n",
"md(f\"Summary of {event_sel.value['ruleName_s']} rule events:\", \"bold\")\n",
"rule_events_df.mp_plot.timeline(\n",
" title=\"Rule Events by Request URI\",\n",
" group_by=\"requestUri_s\",\n",
" source_columns=[\"TimeGenerated\", \"ruleName_s\", \"clientIP_s\", \"host_s\"],\n",
")\n",
"rule_events_df.mp_plot.timeline(\n",
" title=\"Rule Events by Client IP\",\n",
" group_by=\"clientIP_s\",\n",
" source_columns=[\"TimeGenerated\", \"host_s\", \"clientIP_s\", \"requestUri_s\"],\n",
")\n",
"rule_events_df.mp_plot.timeline(\n",
" title=\"Rule Events by Host\",\n",
" group_by=\"host_s\",\n",
" source_columns=[\"TimeGenerated\", \"ruleName_s\", \"clientIP_s\", \"requestUri_s\"],\n",
")\n",
"rule_events_df.mp_plot.timeline(\n",
" title=\"Events by Rule Triggered\",\n",
" group_by=\"ruleName_s\",\n",
" source_columns=[\"TimeGenerated\", \"host_s\", \"clientIP_s\", \"requestUri_s\"],\n",
")\n",
"md(f\"{event_sel.value['ruleName_s']} events:\", \"bold\")\n",
"display(rule_events_df)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Determine the incident status.\n",
"\n",
"Based on the above details determine whether the incident is a False Positive, True Positive or Benign Positive.<br>\n",
"This status will be reflected in the incident within the Sentinel portal."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"Rule_set_name = event_sel.value[\"ruleName_s\"].split(\"-\")[0]\n",
"Rule_set_version = event_sel.value[\"ruleName_s\"].split(\"-\")[1]\n",
"Rule_set_type = event_sel.value[\"ruleName_s\"].split(\"-\")[2]\n",
"Rule_set_id = event_sel.value[\"ruleName_s\"].split(\"-\")[3]\n",
"sub_id = event_sel.value[\"SubscriptionId\"]\n",
"policy_name = event_sel.value[\"policy_s\"]\n",
"rg_name = event_sel.value[\"ResourceGroup\"]\n",
"\n",
"incident_status = widgets.Dropdown(\n",
" options=[\"True Positive\", \"False Positive\", \"Benign Positive\"],\n",
" description=\"Status:\",\n",
" disabled=False,\n",
")\n",
"print(\"What is the determined status of this incident?\")\n",
"incident_status\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if incident_status.value in [\"True Positive\", \"Benign Positive\"]:\n",
" sent_prov.update_incident(\n",
" alert_sel.selected_alert.id.split(\"/\")[-1],\n",
" update_items={\"severity\": \"High\", \"status\": \"Active\"},\n",
" )\n",
" sent_prov.post_comment(\n",
" alert_sel.selected_alert.id.split(\"/\")[-1],\n",
" comment=f\"Incident triaged in notebook, determined to be a {incident_status.value} event.\",\n",
" )\n",
"elif incident_status.value == \"False Positive\" and not Rule_set_name.startswith(\n",
" \"Microsoft_\"\n",
"):\n",
" md(\"Updating non-Default rule-sets is not supported in this notebook currently\")\n",
"else:\n",
" md(\n",
" \"If this is a False Positive use the cells below to add additional exclusions to your WAF policy\"\n",
" )\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"If the above incident is determined to be a false positive you can add exclusions to the WAF rule-set to prevent further alerts.<br>\n",
"\n",
"These exclusions are applied at the WAF level and can prevent future WAF blocks based on set parameters.<br>\n",
"More details of WAF exclusions can be found here: https://learn.microsoft.com/en-us/azure/web-application-firewall/afds/waf-front-door-exclusion\n",
"\n",
"\n",
"Use the cells below to review the currently deployed policy and define and deploy the exclusions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if incident_status.value == \"False Positive\":\n",
" if Rule_set_name != \"Microsoft_DefaultRuleSet\":\n",
" raise MsticpyException(\n",
" \"Custom rule exclusions are not supported in this notebook\"\n",
" )\n",
" api_url = f\"https://management.azure.com/subscriptions/{sub_id}/resourceGroups/{rg_name}/providers/Microsoft.Network/FrontDoorWebApplicationFirewallPolicies/{policy_name}?api-version=2020-11-01\"\n",
" headers = {\n",
" \"Authorization\": f\"Bearer {sent_prov.token}\",\n",
" \"Content-Type\": \"application/json\",\n",
" }\n",
" api_response = httpx.get(api_url, headers=headers)\n",
" policy_props = dict(\n",
" (k, api_response.json()[k])\n",
" for k in (\"tags\", \"sku\", \"properties\", \"etag\", \"location\")\n",
" if k in api_response.json()\n",
" )\n",
" prop_props = policy_props[\"properties\"]\n",
" policy_props[\"properties\"] = dict(\n",
" (k, prop_props[k])\n",
" for k in (\"customRules\", \"managedRules\", \"policySettings\")\n",
" if k in prop_props\n",
" )\n",
" md(\"Current policy configuration: \", \"bold\")\n",
" print(json.dumps(policy_props, indent=4))\n",
"else:\n",
" md(\"No policy updates required for True Positive or Benign Positive events\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Select the number of exclusions that you want to add to the WAF rule:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if incident_status.value == \"False Positive\":\n",
" number_exclusions = widgets.Dropdown(\n",
" options=[1, 2, 3, 4, 5], description=\"Number of exclusions\", disabled=False\n",
" )\n",
" display(number_exclusions)\n",
"else:\n",
" md(\"No policy updates required for True Positive or Benign Positive events\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Define the exclusions you want to apply to the rule.<br>\n",
"Ref: https://learn.microsoft.com/en-us/azure/web-application-firewall/afds/waf-front-door-exclusion"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if incident_status.value == \"False Positive\":\n",
" exclusion_widgets = {}\n",
" for i in range(number_exclusions.value):\n",
" variable_sel = widgets.Dropdown(\n",
" options=[\n",
" \"QueryStringArgNames\",\n",
" \"RequestBodyJsonArgNames\",\n",
" \"RequestBodyPostArgNames\",\n",
" \"RequestCookieNames\",\n",
" \"RequestHeaderNames\",\n",
" ],\n",
" description=\"Match Variable:\",\n",
" disabled=False,\n",
" )\n",
"\n",
" operator_sel = widgets.Dropdown(\n",
" options=[\"Contains\", \"EndsWith\", \"Equals\", \"EqualsAny\", \"StartsWith\"],\n",
" description=\"Operator:\",\n",
" disabled=False,\n",
" )\n",
"\n",
" value_sel = widgets.Text(description=\"Selector:\", disabled=False)\n",
" exclusion_widgets[i] = {\n",
" \"variable_sel\": variable_sel,\n",
" \"operator_sel\": operator_sel,\n",
" \"value_sel\": value_sel,\n",
" }\n",
"\n",
" for widg in exclusion_widgets:\n",
" md(f\"Exclusion {widg+1}:\", \"bold\")\n",
" display(exclusion_widgets[widg][\"variable_sel\"])\n",
" display(exclusion_widgets[widg][\"operator_sel\"])\n",
" display(exclusion_widgets[widg][\"value_sel\"])\n",
"else:\n",
" md(\"No policy updates required for True Positive or Benign Positive events\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell below takes the new exclusions defined above and adds them to the currently set exclusions."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Remove un-needed None values from policy\n",
"def clean_nones(value):\n",
" if isinstance(value, list):\n",
" return [clean_nones(x) for x in value if x is not None]\n",
" elif isinstance(value, dict):\n",
" return {key: clean_nones(val) for key, val in value.items() if val is not None}\n",
" else:\n",
" return value\n",
"\n",
"\n",
"def bool_to_string(value):\n",
" if isinstance(value, bool):\n",
" return str(value).lower()\n",
"\n",
"\n",
"if incident_status.value == \"False Positive\":\n",
" policy_props_backup = policy_props\n",
"\n",
" # Build new exclusions for widgets\n",
" new_exclusions = []\n",
" for widg in exclusion_widgets:\n",
" new_exclusions.append(\n",
" {\n",
" \"matchVariable\": f'{exclusion_widgets[widg][\"variable_sel\"].value}',\n",
" \"selectorMatchOperator\": f'{exclusion_widgets[widg][\"operator_sel\"].value}',\n",
" \"selector\": f'{exclusion_widgets[widg][\"value_sel\"].value}',\n",
" }\n",
" )\n",
" modified_rule_set = None\n",
" override_set = []\n",
" new_rules = []\n",
" # Get existing ruleset\n",
" new_rule_set = []\n",
" for rule_set in policy_props[\"properties\"][\"managedRules\"][\"managedRuleSets\"]:\n",
" if rule_set[\"ruleSetType\"] == Rule_set_name:\n",
" modified_rule_set = rule_set\n",
" else:\n",
" new_rule_set.append(rule_set)\n",
"\n",
" exclusion_exists = False\n",
" for override in modified_rule_set[\"ruleGroupOverrides\"]:\n",
" if override[\"ruleGroupName\"] == \"SQLI\":\n",
" exclusion_exists = True\n",
" rule_ids = [rule[\"ruleId\"] for rule in override[\"rules\"]]\n",
" if Rule_set_id in rule_ids:\n",
" for rule in override[\"rules\"]:\n",
" if rule[\"ruleId\"] == f\"{Rule_set_id}\":\n",
" rule[\"exclusions\"] += new_exclusions\n",
" new_rules.append(rule)\n",
" else:\n",
" new_rules = override[\"rules\"] + [\n",
" {\n",
" \"ruleId\": f\"{Rule_set_id}\",\n",
" \"enabledState\": \"Enabled\",\n",
" \"action\": \"AnomalyScoring\",\n",
" \"exclusions\": new_exclusions,\n",
" }\n",
" ]\n",
" override[\"rules\"] = new_rules\n",
" override_set.append(override)\n",
"\n",
" if not exclusion_exists:\n",
" modified_rule_set[\"ruleGroupOverrides\"] = [\n",
" {\n",
" \"ruleGroupName\": \"SQLI\",\n",
" \"rules\": [\n",
" {\n",
" \"ruleId\": f\"{Rule_set_id}\",\n",
" \"enabledState\": \"Enabled\",\n",
" \"action\": \"AnomalyScoring\",\n",
" \"exclusions\": new_exclusions,\n",
" }\n",
" ],\n",
" }\n",
" ]\n",
"\n",
" if modified_rule_set:\n",
" if override_set:\n",
" # Remove the existing SQLI rules and replace with our modified set\n",
" modified_rule_set[\"ruleGroupOverrides\"] = override_set\n",
" new_rule_set.append(modified_rule_set)\n",
"\n",
" new_props = policy_props\n",
" new_props[\"properties\"][\"managedRules\"][\"managedRuleSets\"] = new_rule_set\n",
" new_props = clean_nones(new_props)\n",
"\n",
" # Add check that all previous policies still exist in json before applying\n",
" if set(\n",
" [\n",
" ruleset[\"ruleSetType\"]\n",
" for ruleset in new_props[\"properties\"][\"managedRules\"][\"managedRuleSets\"]\n",
" ]\n",
" ) != set(\n",
" [\n",
" existing_ruleset[\"ruleSetType\"]\n",
" for existing_ruleset in policy_props_backup[\"properties\"][\"managedRules\"][\n",
" \"managedRuleSets\"\n",
" ]\n",
" ]\n",
" ):\n",
" raise Exception(\n",
" \"An issue has occurred and one of the existing rulesets has been removed. To prevent accidental deletion of a ruleset the update has been prevent. Please re-run this notebook and try again.\"\n",
" )\n",
"\n",
" new_props = json.dumps(new_props, default=bool_to_string)\n",
" # Apply policy via API\n",
" api_update_response = httpx.put(api_url, headers=headers, content=new_props)\n",
" if api_update_response.status_code in (200, 201, 202):\n",
" md(\"Exclusions applied\")\n",
" else:\n",
" md(\n",
" f\"There was a problem updating the exclusions status code: {api_update_response.status_code}. Please try adding the exclusions via the Azure Portal.\"\n",
" )\n",
"else:\n",
" md(\"No policy updates required for True Positive or Benign Positive events\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Review Updated Exclusion Rules\n",
"Below you can see the exclusion rules newly applied to validate they are as expected."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if incident_status.value == \"False Positive\":\n",
" updated_rule_api_response = httpx.get(api_url, headers=headers)\n",
" print(updated_rule_api_response.json())\n",
"else:\n",
" md(\"No policy updates required for True Positive or Benign Positive events\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Update Incident\n",
"Now the exclusions have been put in place we can update the incident in Microsoft Sentinel to reflect this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if incident_status.value == \"False Positive\":\n",
" sent_prov.update_incident(\n",
" alert_sel.selected_alert.id.split(\"/\")[-1], update_items={\"severity\": \"Low\"}\n",
" )\n",
" sent_prov.post_comment(\n",
" alert_sel.selected_alert.id.split(\"/\")[-1],\n",
" comment=\"Incident triaged in notebook, WAF policy updated with exclusions.\",\n",
" )\n",
"else:\n",
" md(\"No policy updates required for True Positive or Benign Positive events\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Appendix\n",
"\n",
"## Configuration\n",
"\n",
"### `msticpyconfig.yaml` configuration File\n",
"You can configure primary and secondary TI providers and any required parameters in the `msticpyconfig.yaml` file. This is read from the current directory or you can set an environment variable (`MSTICPYCONFIG`) pointing to its location.\n",
"\n",
"To configure this file see the [ConfigureNotebookEnvironment notebook](https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb)"
]
}
],
"metadata": {
"kernel_info": {
"name": "python310-sdkv2"
},
"kernelspec": {
"display_name": "Python 3.10 - SDK v2",
"language": "python",
"name": "python310-sdkv2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.8"
}
},
"nbformat": 4,
"nbformat_minor": 4
}